Dynamic pricing with online learning of customer arrival rate and acceptable price
In many industries, the prices of commodities are adjusted by taking into account the current level of inventory and the distribution of future demand. This has motivated studies in the area of dynamic pricing, which, among other areas, has been successfully applied in the airline industry. A common assumption in these studies is that the distribution of future demand is known in advance. While this distribution can sometimes be learned from historical data in advance, there are cases where the selling scenario has unique characteristics and learning can only happen while the selling process is going on. In this note we use Bayesian learning to update our belief about two distributions: the distribution of customer arrivals and the distribution of prices acceptable to customers.
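The note does not fix particular priors; with the usual conjugate choices (a Gamma prior on the Poisson arrival rate and a Beta prior on the probability that a posted price is acceptable), the Bayesian updates reduce to simple counting. A minimal sketch under those assumed priors:

```python
from dataclasses import dataclass

@dataclass
class GammaPoisson:
    """Conjugate prior for a Poisson arrival rate: lambda ~ Gamma(alpha, beta)."""
    alpha: float = 1.0  # prior shape
    beta: float = 1.0   # prior rate (inverse scale)

    def update(self, arrivals: int, periods: int) -> None:
        # Posterior after observing `arrivals` customers over `periods` periods.
        self.alpha += arrivals
        self.beta += periods

    @property
    def mean_rate(self) -> float:
        return self.alpha / self.beta

@dataclass
class BetaBernoulli:
    """Conjugate prior for the probability that a posted price is accepted."""
    a: float = 1.0  # prior pseudo-count of acceptances
    b: float = 1.0  # prior pseudo-count of rejections

    def update(self, accepted: bool) -> None:
        if accepted:
            self.a += 1
        else:
            self.b += 1

    @property
    def mean_accept_prob(self) -> float:
        return self.a / (self.a + self.b)

# Made-up selling history: 12 arrivals over 10 periods, 3 of 5 offers accepted.
rate = GammaPoisson()
rate.update(arrivals=12, periods=10)

accept = BetaBernoulli()
for outcome in [True, True, False, True, False]:
    accept.update(outcome)
```

With conjugate priors the posterior stays in closed form after every observation, so the belief can be refreshed after each selling period without reprocessing history.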
Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns
As machine learning is increasingly used to make real-world decisions, recent research efforts aim to define and ensure fairness in algorithmic decision making. Existing methods often assume a fixed set of observable features to define individuals, but lack a discussion of what happens when some features are not observed at test time. In this paper, we study the fairness of naive Bayes classifiers, which allow partial observations. In particular, we introduce the notion of a discrimination pattern, which refers to an individual receiving different classifications depending on whether some sensitive attributes were observed; a model is considered fair if it has no such pattern. We propose an algorithm to discover discrimination patterns in a naive Bayes classifier, and show how to learn maximum-likelihood parameters subject to these fairness constraints. Our approach iteratively discovers and eliminates discrimination patterns until a fair model is learned. An empirical evaluation on three real-world datasets demonstrates that we can remove exponentially many discrimination patterns by adding only a small fraction of them as constraints.
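The notion of a discrimination pattern can be illustrated on a toy model. The sketch below uses hypothetical conditional probability tables, a single sensitive attribute S, one non-sensitive feature X, and a threshold delta standing in for the paper's degree-of-discrimination bound; it enumerates assignments where observing S shifts the classifier's posterior:

```python
# Toy naive Bayes: class Y, sensitive attribute S, non-sensitive feature X.
# All probabilities below are made up for illustration.
P_Y = {1: 0.5, 0: 0.5}
P_S_given_Y = {1: {1: 0.8, 0: 0.2}, 0: {1: 0.3, 0: 0.7}}  # P(S=s | Y=y)
P_X_given_Y = {1: {1: 0.6, 0: 0.4}, 0: {1: 0.5, 0: 0.5}}  # P(X=x | Y=y)

def posterior_y1(x, s=None):
    """P(Y=1 | X=x [, S=s]); S is left unobserved when s is None."""
    scores = {}
    for y in (0, 1):
        p = P_Y[y] * P_X_given_Y[y][x]
        if s is not None:
            p *= P_S_given_Y[y][s]
        scores[y] = p
    return scores[1] / (scores[0] + scores[1])

def discrimination_patterns(delta=0.05):
    """Assignments where observing S moves the posterior by more than delta."""
    found = []
    for x in (0, 1):
        for s in (0, 1):
            gap = abs(posterior_y1(x, s) - posterior_y1(x))
            if gap > delta:
                found.append((x, s, round(gap, 3)))
    return found
```

In this toy model every (x, s) assignment is a pattern; the paper's contribution is finding such patterns without exhaustive enumeration and retraining the parameters until none remain.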
A unifying framework for fairness-aware influence maximization
The problem of selecting a subset of nodes with the greatest influence in a graph, commonly known as influence maximization, has been well studied over the past decade. This problem has real-world applications which can potentially affect the lives of individuals. Algorithmic decision making in such domains raises concerns about its societal implications. One of these concerns, which surprisingly has received only limited attention so far, is algorithmic bias and fairness. We propose a flexible framework that extends and unifies the existing works in fairness-aware influence maximization. This framework is based on an integer programming formulation of the influence maximization problem. The fairness requirements are enforced by adding linear constraints or modifying the objective function. Contrary to previous work, which designs specific algorithms for each variant, we develop a formalism which is general enough for specifying different notions of fairness. A problem defined in this formalism can then be solved using efficient mixed integer programming solvers. The experimental evaluation indicates that our framework not only is general but also is competitive with existing algorithms.
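The framework itself is an integer program handed to a MIP solver; as a solver-free illustration, the sketch below brute-forces a toy instance with a deterministic coverage model (in place of stochastic cascades) and a hypothetical per-group minimum-coverage requirement, showing how a fairness notion enters as a simple side constraint:

```python
from itertools import combinations

# Toy instance: node -> set of nodes it influences (itself plus out-neighbours).
reach = {
    0: {0, 1, 2},
    1: {1, 2, 3},
    2: {2},
    3: {3},
    4: {4},
    5: {5},
}
group = {0: "A", 1: "A", 2: "A", 3: "A", 4: "B", 5: "B"}

def covered(seeds):
    out = set()
    for s in seeds:
        out |= reach[s]
    return out

def fair_max_influence(k, min_per_group):
    """Best k-seed set covering at least min_per_group nodes of every group."""
    best = None
    for seeds in combinations(reach, k):
        cov = covered(seeds)
        counts = {}
        for v in cov:
            counts[group[v]] = counts.get(group[v], 0) + 1
        if all(counts.get(g, 0) >= min_per_group for g in set(group.values())):
            if best is None or len(cov) > len(best[1]):
                best = (seeds, cov)
    return best

unconstrained = max(combinations(reach, 2), key=lambda s: len(covered(s)))
fair_seeds, fair_cov = fair_max_influence(k=2, min_per_group=1)
```

On this instance the unconstrained optimum covers no group-B node at all, while the fairness-constrained optimum achieves the same total coverage with both groups represented.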
Stochastic Constraint Programming with And-Or Branch-and-Bound
Complex multi-stage decision making problems often involve uncertainty, for example, regarding demand
or processing times. Stochastic constraint programming was proposed as a way to formulate
and solve such decision problems, involving arbitrary constraints over both decision and random
variables. What stochastic constraint programming currently lacks is support for the use of factorized
probabilistic models that are popular in the graphical model community. We show how a state-of-the-art
probabilistic inference engine can be integrated into standard constraint solvers. The resulting
approach searches the And-Or search tree directly, and we investigate tight bounds on the expected
utility objective. This significantly improves search efficiency and outperforms scenario-based
methods that ground out the possible worlds.
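Leaving aside the paper's bounding machinery, the quantity being optimised can be shown on a tiny hand-built And-Or tree: decision (Or) nodes take their best child, chance nodes take the probability-weighted average of theirs. A minimal sketch with made-up utilities:

```python
def expected_utility(node):
    """Expected utility of an And-Or tree under optimal decisions."""
    kind = node["type"]
    if kind == "leaf":
        return node["utility"]
    if kind == "decision":  # Or node: pick the best branch
        return max(expected_utility(c) for c in node["children"])
    if kind == "chance":    # random variable: probability-weighted sum
        return sum(p * expected_utility(c)
                   for p, c in zip(node["probs"], node["children"]))
    raise ValueError(f"unknown node type: {kind}")

# A stage-1 decision, then a random outcome, then a stage-2 recourse decision.
tree = {
    "type": "decision",
    "children": [
        {"type": "chance", "probs": [0.7, 0.3], "children": [
            {"type": "leaf", "utility": 10},
            {"type": "decision", "children": [
                {"type": "leaf", "utility": 0},
                {"type": "leaf", "utility": 4},
            ]},
        ]},
        {"type": "leaf", "utility": 8},  # a safe, deterministic alternative
    ],
}
```

Here the risky branch is worth 0.7 * 10 + 0.3 * max(0, 4) = 8.2, beating the safe branch's 8; the paper's bounds let the solver prune such comparisons without fully expanding every subtree.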
Enhancing scalability of peer-to-peer energy markets using adaptive segmentation method
This paper proposes an adaptive segmentation method as a market clearing mechanism for a peer-to-peer (P2P) energy trading scheme with a large number of market players. In the proposed method, market players participate in the market by announcing their bids. In the first step, players are assigned to different segments based on their features, where the balanced k-means clustering method is implemented to form the segments. The segments are formed based on the similarity between players, where the amount of energy to trade and its corresponding price are taken as the players' features. In the next step, a distributed method is employed to clear the market in each segment without any need for the players' private information. The novelty of this paper lies in developing an adaptive algorithm for dividing a large number of market players into multiple segments, enhancing the scalability of P2P trading by reducing data exchange and communication overheads. The proposed approach can be used along with any distributed market clearing method. In this paper, two different structures, a community-based market and a decentralized bilateral trading market, are used to demonstrate the efficacy of the proposed method. Simulation results show the beneficial properties of the proposed segmentation method.
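The paper's segmentation uses balanced k-means; as a much simpler stand-in that keeps the two key properties (segments of equal size, similar bids grouped together), the sketch below sorts bids by price and energy and cuts the list into equal-size segments. Player names and bid values are made up:

```python
def segment_players(bids, num_segments):
    """Split players into equal-size segments of similar bids.

    Each bid is (player_id, energy_kwh, price). This is a simplified stand-in
    for the paper's balanced k-means: sort by (price, energy) so that similar
    bids become adjacent, then cut into contiguous equal-size segments.
    """
    ordered = sorted(bids, key=lambda b: (b[2], b[1]))
    size = -(-len(ordered) // num_segments)  # ceiling division
    return [ordered[i:i + size] for i in range(0, len(ordered), size)]

bids = [
    ("p1", 5.0, 0.10), ("p2", 4.0, 0.11), ("p3", 9.0, 0.24),
    ("p4", 8.0, 0.25), ("p5", 2.0, 0.12), ("p6", 7.0, 0.26),
]
segments = segment_players(bids, num_segments=2)
# Each segment can now be cleared independently by any distributed method,
# so message exchange stays within segments rather than across all players.
```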
On Constraints, Optimisation, Probability and Data Mining
Constraint satisfaction and optimization (CSP(O)), probabilistic inference, and data mining are three important subdomains of artificial intelligence. CSP(O) investigates methods for efficiently solving combinatorial problems, probabilistic inference deals with answering queries about uncertain knowledge bases, while data mining aims at finding and modeling regularities in the data. Even though these domains have been developed independently, there are strong connections and interactions between them. Studying and extending their interactions is the main theme of this thesis.
This thesis has five contributions. First, we build on existing methods in probabilistic inference and constraint programming and propose a method for adding probabilities to CSP(O) models. This allows us to constrain or optimize over probability values. Second, we develop a novel algorithm for optimizing the expected utility in stochastic constraint programs. Unlike earlier works that assume independence of random variables, we assume a joint distribution represented by a Bayesian network. Third, we develop an algorithm for obtaining the exact solution of the constrained clustering problem with the maximum sum-of-squares objective. Using the column-generation framework, we decompose the problem into two components: one that generates candidate clusters adhering to user constraints, and one that tries to find the optimal solution among combinations of the generated candidates. Fourth, we develop an exact algorithm to solve a special class of graph clustering problems. The formulation of this problem involves an exponential number of constraints; we propose a mechanism to incrementally include only a subset of these constraints in the problem. Fifth, we develop a mechanism for learning the distribution of taxi requests from large datasets of taxi trip records. We show that combining techniques from multiple domains can yield improvements in addressing a number of existing and new problems.
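The fourth contribution's incremental inclusion of constraints follows the familiar lazy-constraint (row-generation) loop: solve with the current constraint subset, ask a separation oracle for a violated constraint, add it, and resolve. A solver-free toy sketch (brute-force master problem, two hypothetical conflict constraints standing in for an exponential family):

```python
from itertools import product

# Lazy constraint generation: start with no conflict constraints active,
# solve, ask a separation oracle for a violated constraint, add it, repeat.
weights = {"a": 4, "b": 3, "c": 2}
conflicts = [("a", "b"), ("b", "c")]  # exponentially many in general

def solve(active_constraints):
    """Brute-force master: best subset satisfying the active constraints."""
    best = set()
    items = list(weights)
    for mask in product([0, 1], repeat=len(items)):
        chosen = {i for i, bit in zip(items, mask) if bit}
        ok = all(not (u in chosen and v in chosen)
                 for u, v in active_constraints)
        if ok and sum(weights[i] for i in chosen) > sum(weights[i] for i in best):
            best = chosen
    return best

def separate(solution):
    """Oracle: return one violated conflict, or None if the solution is feasible."""
    for u, v in conflicts:
        if u in solution and v in solution:
            return (u, v)
    return None

active = set()
sol = solve(active)
while (cut := separate(sol)) is not None:
    active.add(cut)
    sol = solve(active)
```

The loop stops as soon as the incumbent violates none of the candidate constraints; here only one of the two conflicts ever needs to be added, which is the point of including constraints incrementally.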